Minhui Xue

What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift

Apr 28, 2025

Whispering Under the Eaves: Protecting User Privacy Against Commercial and LLM-powered Automatic Speech Recognition Systems

Apr 01, 2025

From Pixels to Trajectory: Universal Adversarial Example Detection via Temporal Imprints

Mar 06, 2025

CAMP in the Odyssey: Provably Robust Reinforcement Learning with Certified Radius Maximization

Jan 29, 2025

AI-Compass: A Comprehensive and Effective Multi-module Testing Tool for AI Systems

Nov 09, 2024

Reconstruction of Differentially Private Text Sanitization via Large Language Models

Oct 16, 2024

Edge Unlearning is Not "on Edge"! An Adaptive Exact Unlearning System on Resource-Constrained Devices

Oct 15, 2024

Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems

Jul 11, 2024

QUEEN: Query Unlearning against Model Extraction

Jul 01, 2024

On Security Weaknesses and Vulnerabilities in Deep Learning Systems

Jun 12, 2024